Self-Contrastive Learning: Single-Viewed Supervised Contrastive Framework Using Sub-network

Authors

Abstract

Contrastive loss has significantly improved performance in supervised classification tasks by using a multi-viewed framework that leverages augmentation and label information. The augmentation enables contrast with another view of a single image but enlarges training time and memory usage. To exploit the strength of multi-views while avoiding the high computation cost, we introduce a multi-exit architecture that outputs multiple features of a single image in a single-viewed framework. To this end, we propose Self-Contrastive (SelfCon) learning, which self-contrasts within multiple outputs from the different levels of a network. The multi-exit architecture efficiently replaces multi-augmented images and leverages various information from different layers of the network. We demonstrate that SelfCon learning improves the classification performance of the encoder network, and empirically analyze its advantages in terms of the single-view and the sub-network. Furthermore, we provide theoretical evidence of the performance increase based on the mutual information bound. For ImageNet classification with ResNet-50, SelfCon improves accuracy by +0.6% with 59% memory and 48% time of Supervised Contrastive learning, and a simple ensemble of multi-exit outputs boosts performance up to +1.5%. Our code is available at https://github.com/raymin0223/self-contrastive-learning.
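The core idea is compact enough to sketch. Below is a minimal PyTorch illustration of a SelfCon-style loss, assuming a backbone with a single auxiliary sub-network exit, so that one augmented view of each image yields two feature vectors; positives are defined by shared labels, as in supervised contrastive learning. The function name, the single-exit setup, and the temperature value are illustrative assumptions; this is a hedged sketch of the idea, not the authors' reference implementation (see the linked repository for that).

```python
# Minimal sketch of a SelfCon-style loss (illustrative, not the authors'
# reference code; see https://github.com/raymin0223/self-contrastive-learning).
# Assumed setup: one auxiliary exit, so each image produces two
# L2-normalized feature vectors from a single augmented view.
import torch

def selfcon_loss(feat_sub, feat_final, labels, temperature=0.07):
    """Supervised contrastive loss over multi-exit features of one view.

    feat_sub, feat_final: (B, D) L2-normalized features from the
        sub-network exit and the final exit of the same images.
    labels: (B,) class labels; samples sharing a label are positives.
    """
    feats = torch.cat([feat_sub, feat_final], dim=0)   # (2B, D)
    labels = torch.cat([labels, labels], dim=0)        # (2B,)

    sim = feats @ feats.T / temperature                # (2B, 2B)

    # Exclude self-similarity on the diagonal from the softmax.
    n = sim.size(0)
    diag = torch.eye(n, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(diag, float('-inf'))

    # Positives: same label (this includes the other exit of the same image).
    pos_mask = (labels[:, None] == labels[None, :]) & ~diag

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average log-likelihood over each anchor's positives, then over anchors.
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()
```

Relative to a two-view SupCon batch, the second augmented view is replaced here by the sub-network feature of the same single-viewed image, which is where the memory and time savings quoted in the abstract come from.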

Similar Articles

Time-Contrastive Networks: Self-Supervised Learning from Video

We propose a self-supervised approach for learning representations and robotic behaviors entirely from unlabeled videos recorded from multiple viewpoints, and study how this representation can be used in two robotic imitation settings: imitating object interactions from videos of humans, and imitating human poses. Imitation of human behavior requires a viewpoint-invariant representation that ca...
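For intuition, a time-contrastive objective of this flavor can be written as a triplet loss in which co-temporal frames from different viewpoints are positives and temporally distant frames from the same viewpoint are negatives. The sketch below is an illustrative PyTorch rendering under those assumptions (the function name, margin value, and embedding shapes are placeholders), not the paper's exact formulation.

```python
# Illustrative time-contrastive triplet loss: pull together embeddings of
# the same instant seen from two cameras, push apart embeddings of the
# same camera at distant times. A sketch, not the paper's exact objective.
import torch
import torch.nn.functional as F

def time_contrastive_triplet(anchor, positive, negative, margin=0.2):
    """anchor/positive: (B, D) co-temporal embeddings from two viewpoints;
    negative: (B, D) embeddings from the anchor's viewpoint at distant times."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)  # squared distance to positive
    d_neg = (anchor - negative).pow(2).sum(dim=1)  # squared distance to negative
    return F.relu(d_pos - d_neg + margin).mean()
```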

Contrastive Learning Using Spectral Methods

In many natural settings, the analysis goal is not to characterize a single data set in isolation, but rather to understand the difference between one set of observations and another. For example, given a background corpus of news articles together with writings of a particular author, one may want a topic model that explains word patterns and themes specific to the author. Another example come...

Weakly-Supervised Learning with Cost-Augmented Contrastive Estimation

We generalize contrastive estimation in two ways that permit adding more knowledge to unsupervised learning. The first allows the modeler to specify not only the set of corrupted inputs for each observation, but also how bad each one is. The second allows specifying structural preferences on the latent variable used to explain the observations. They require setting additional hyperparameters, w...

On Contrastive Divergence Learning

Maximum-likelihood (ML) learning of Markov random fields is challenging because it requires estimates of averages that have an exponential number of terms. Markov chain Monte Carlo methods typically take a long time to converge on unbiased estimates, but Hinton (2002) showed that if the Markov chain is only run for a few steps, the learning can still work well and it approximately minimizes a d...
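As a concrete illustration of the few-steps idea, here is a toy NumPy sketch of CD-1 for a binary restricted Boltzmann machine: the Gibbs chain is run for a single step starting from the data, and the gradient is approximated by the difference between data statistics and reconstruction statistics. A didactic sketch under those assumptions, not code from the paper.

```python
# Toy CD-1 update for a binary RBM in NumPy: run the Gibbs chain one step
# from the data instead of to convergence. Didactic sketch only.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b, c, v0, lr=0.1):
    """One CD-1 update on a batch of binary visible vectors v0 (B, nv).

    W: (nv, nh) weights; b: (nv,) visible bias; c: (nh,) hidden bias.
    """
    # Positive phase: hidden probabilities and samples given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)

    # One Gibbs step: reconstruct visibles, then hidden probabilities again.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)

    # Approximate gradient: data statistics minus reconstruction statistics.
    B = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / B
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```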

Contrastive Learning for Image Captioning

Image captioning, a popular topic in computer vision, has achieved substantial progress in recent years. However, the distinctiveness of natural descriptions is often overlooked in previous work. It is closely related to the quality of captions, as distinctive captions are more likely to describe images with their unique aspects. In this work, we propose a new learning method, Contrastive Learn...


Journal

Journal Title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i1.25091